Speaker Series: Erase Unconscious Bias From Your AI Datasets

#artificialintelligence

The ACM Washington DC Speaker Series presents "Erase Unconscious Bias From Your AI Datasets" by Lauren Maffeo. 6PM: networking; 7PM: presentation. SUMMARY: A young child defines the world purely by the small amount of it they can see. This is the root of dataset bias: intelligence built on information that is too limited or too homogeneous. Advances in AI technologies like machine learning and deep neural networks have the potential to save time and boost productivity, but what happens if we train these technologies on datasets that exclude large portions of the population? For example, some facial recognition software fails to detect dark skin because people of color were excluded from the datasets used to train it.


Erase unconscious bias from your AI datasets

#artificialintelligence

Artificial intelligence failures often generate a lot of laughs when systems make silly mistakes. However, "the problem is that machine learning gaffes aren't always funny … They can have pretty serious consequences for end users when the datasets that are used to train these machine learning algorithms aren't diverse enough," says Lauren Maffeo, a senior content analyst at GetApp. In her Lightning Talk, "Erase unconscious bias from your AI datasets," at All Things Open 2018, October 23 in Raleigh, NC, Lauren described some of the grim implications of biased datasets and advocated for developers to take measures to protect people from machine learning and artificial intelligence bias. To learn more about this issue, watch Lauren's talk and read her Opensource.com article.